Statistical Efficiency
We would like to thank the reviewers for their constructive feedback; we will correct the typos raised and include
Full (exact) conformal set vs. split or cross-validated conformal set
Non-connectedness of the conformal prediction set. This was initially suggested in [18, Remark 1]. We follow the current practice in the literature [14, Remark 5]. We did not observe violations. We will also summarize the proposed algorithm in explicit pseudo-code.
On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL
We study reward-free reinforcement learning (RL) under general non-linear function approximation, and establish sample efficiency and hardness results under various standard structural assumptions. On the positive side, we propose the RFOLIVE (Reward-Free OLIVE) algorithm for sample-efficient reward-free exploration under minimal structural assumptions, which covers the previously studied settings of linear MDPs (Jin et al., 2020b), linear completeness (Zanette et al., 2020b) and low-rank MDPs with unknown representation (Modi et al., 2021). Our analyses indicate that the explorability or reachability assumptions, previously made for the latter two settings, are not necessary statistically for reward-free exploration. On the negative side, we provide a statistical hardness result for both reward-free and reward-aware exploration under linear completeness assumptions when the underlying features are unknown, showing an exponential separation between low-rank and linear completeness settings.
Generalized Linear Bandits: Almost Optimal Regret with One-Pass Update
Yu-Jie Zhang, Sheng-An Xu, Peng Zhao, Masashi Sugiyama
We study the generalized linear bandit (GLB) problem, a contextual multi-armed bandit framework that extends the classical linear model by incorporating a non-linear link function, thereby modeling a broad class of reward distributions such as Bernoulli and Poisson. While GLBs are widely applicable to real-world scenarios, their non-linear nature introduces significant challenges in achieving both computational and statistical efficiency. Existing methods typically trade off between two objectives, either incurring high per-round costs for optimal regret guarantees or compromising statistical efficiency to enable constant-time updates. In this paper, we propose a jointly efficient algorithm that attains a nearly optimal regret bound with $\mathcal{O}(1)$ time and space complexities per round. The core of our method is a tight confidence set for the online mirror descent (OMD) estimator, which is derived through a novel analysis that leverages the notion of mix loss from online prediction. The analysis shows that our OMD estimator, even with its one-pass updates, achieves statistical efficiency comparable to maximum likelihood estimation, thereby leading to a jointly efficient optimistic method.
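The abstract does not spell out the update itself, but the one-pass idea can be illustrated with a short sketch: keep only the current estimate and a running second-moment matrix, and fold each new observation into them with a single preconditioned gradient step, discarding the data point afterwards. The sketch below assumes a logistic link; the class name, learning rate, and preconditioner are illustrative assumptions, and this is a generic single-pass OMD-style update, not the paper's exact estimator or confidence set.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnePassOMDLogistic:
    """Sketch of a one-pass, OMD-style update for a logistic bandit.
    After observing (x_t, r_t), the estimate theta is refreshed with a
    single gradient step preconditioned by a running matrix H, so the
    per-round cost is constant in the number of rounds t."""

    def __init__(self, dim, lam=1.0, lr=0.5):
        self.theta = np.zeros(dim)
        self.H = lam * np.eye(dim)   # running preconditioner (regularized)
        self.lr = lr

    def update(self, x, r):
        # Gradient of the logistic loss at the current estimate only:
        # the data point is used once and then discarded (one-pass).
        g = (sigmoid(x @ self.theta) - r) * x
        self.H += np.outer(x, x)     # rank-one update of the preconditioner
        self.theta -= self.lr * np.linalg.solve(self.H, g)
        return self.theta
```

The point of the sketch is the resource profile: each round touches its data point exactly once, so time and memory per round do not grow with t; the only state carried forward is the estimate and the d-by-d preconditioner.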
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.88)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.54)
Statistical Efficiency of Distributional Temporal Difference Learning
Distributional reinforcement learning (DRL) has achieved empirical success in various domains. One of the core tasks in DRL is distributional policy evaluation, which involves estimating the return distribution $\eta^\pi$ for a given policy $\pi$. Distributional temporal difference learning has accordingly been proposed as an extension of temporal difference (TD) learning in classic RL. In the tabular case, Rowland et al. [2018] and Rowland et al. [2023] proved the asymptotic convergence of two instances of distributional TD, namely categorical temporal difference learning (CTD) and quantile temporal difference learning (QTD), respectively. In this paper, we go a step further and analyze the finite-sample performance of distributional TD. To facilitate theoretical analysis, we propose non-parametric distributional TD learning (NTD). For a $\gamma$-discounted infinite-horizon tabular Markov decision process, we show that NTD needs $\widetilde{O}\left(\frac{1}{\varepsilon^{2p}(1-\gamma)^{2p+1}}\right)$ iterations to achieve an $\varepsilon$-optimal estimator with high probability, when the estimation error is measured by the $p$-Wasserstein distance. This sample complexity bound is minimax optimal (up to logarithmic factors) in the case of the $1$-Wasserstein distance. To achieve this, we establish a novel Freedman's inequality in Hilbert spaces, which would be of independent interest. In addition, we revisit CTD, showing that the same non-asymptotic convergence bounds hold for CTD in the case of the $p$-Wasserstein distance.
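For concreteness, here is a minimal tabular sketch of one CTD step, using the standard C51-style projection onto a fixed, evenly spaced support. The function name and data layout are assumptions made for illustration, and this is the classical CTD update rather than the paper's NTD procedure.

```python
import numpy as np

def ctd_update(p, s, r, s_next, support, gamma, alpha):
    """One tabular categorical TD (CTD) step in the spirit of
    Rowland et al. [2018]. p[s] is a probability vector over the fixed
    `support` atoms approximating the return distribution at state s.
    Illustrative sketch; assumes evenly spaced support atoms."""
    K = len(support)
    z_min, z_max = support[0], support[-1]
    dz = support[1] - support[0]
    target = np.zeros(K)
    # Push each atom of the successor distribution through the Bellman
    # map r + gamma * z, then redistribute its mass onto the two nearest
    # support atoms (linear interpolation, i.e., the Cramer projection).
    for k in range(K):
        tz = np.clip(r + gamma * support[k], z_min, z_max)
        b = (tz - z_min) / dz
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:
            target[lo] += p[s_next][k]
        else:
            target[lo] += p[s_next][k] * (hi - b)
            target[hi] += p[s_next][k] * (b - lo)
    # Stochastic-approximation mixture: move p[s] toward the projected target.
    p[s] = (1 - alpha) * p[s] + alpha * target
```

Each step transports the successor state's atoms through the Bellman map and mixes the projected result into the current estimate with step size $\alpha$; the abstract's bounds concern how many such steps are needed for an $\varepsilon$-accurate estimate in $p$-Wasserstein distance.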
Export Reviews, Discussions, Author Feedback and Meta-Reviews
The paper marries a sparse approximation to the black-box inference method of [6], and demonstrates its effectiveness on a pleasingly wide variety of problems. I almost can't believe it took us this long for this paper to be written, but I'm happy that it has. It made sense once I realized it was referring to [6], but I wonder if there isn't a better word than "automated" to describe the fact that it can handle black-box likelihood functions. Quality and clarity: The intro was fairly clearly written, although I don't understand why the term "statistical efficiency" is used to describe a bound being well-approximated. Shouldn't statistical efficiency refer to taking full advantage of the available data? How can the computational complexity ($O(M^3)$) not depend on the number of datapoints?